
    On Dependability in Distributed Databases

    Distributed database availability, reliability, and mean transaction completion time are derived for repairable database systems in which each component is continuously available for repair. Reliability is the probability that the entire transaction can execute properly without failure. It is computed as a function of mean time to failure (MTTF) and mean time to repair (MTTR). Tradeoffs between distributed database query and update are derived in terms of both performance and reliability.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107965/1/citi-tr-92-9.pd
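
    The abstract does not reproduce the underlying formulas; as a rough sketch, the standard repairable-component relations (steady-state availability from MTTF and MTTR, and reliability under an assumed exponential failure model, which is an assumption rather than a detail taken from the paper) can be written as:

        import math

        def availability(mttf, mttr):
            # Steady-state availability of a repairable component.
            return mttf / (mttf + mttr)

        def reliability(mttf, transaction_time):
            # Probability that a transaction of the given duration completes
            # before the component fails, assuming exponentially distributed
            # times to failure (an assumption, not taken from the paper).
            return math.exp(-transaction_time / mttf)

        # Illustrative figures only: MTTF = 1000 h, MTTR = 2 h, a 0.05 h transaction.
        print(availability(1000, 2))        # ~0.998
        print(reliability(1000, 0.05))      # ~0.99995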

    Performance Modeling of the PeopleSoft Multi-Tier Remote Computing Architecture

    Complex client-server configurations being designed today require a new and closely coordinated approach to analytic modeling and measurement. A closed queuing network model for a two-tiered PeopleSoft 6 client-server system with an Oracle database server is demonstrated using a new performance modeling tool that applies mean value analysis. The focus of this work is on the measurement and modeling of the PeopleSoft architecture to provide useful capacity planning insights for an actual large-scale university-wide deployment. A testbed and database exerciser are then developed to measure model parameters and perform the initial validation tests. The testbed also provides preliminary test data on a proposed three-tiered deployment architecture that includes the Citrix WinFrame environment as an intermediate level between the client and the Oracle server.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107929/1/citi-tr-97-5.pd
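
    The modeling tool and its measured parameters are not shown in the abstract; as a hedged illustration of the mean value analysis technique it applies, an exact MVA recursion for a closed network of single-server stations might look like:

        def mva(service_demands, n_customers):
            # Exact Mean Value Analysis for a closed network of single-server
            # FCFS stations; service_demands[i] is the total demand (seconds)
            # a transaction places on station i.  No think time is modeled.
            queue = [0.0] * len(service_demands)
            for n in range(1, n_customers + 1):
                residence = [d * (1 + q) for d, q in zip(service_demands, queue)]
                response = sum(residence)                    # system response time
                throughput = n / response                    # transactions per second
                queue = [throughput * r for r in residence]  # Little's law per station
            return response, throughput

        # Hypothetical demands: client 20 ms, database server 50 ms, disk 30 ms.
        resp, thr = mva([0.020, 0.050, 0.030], n_customers=25)
        print(f"response ~ {resp:.3f} s, throughput ~ {thr:.1f} tps")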

    On the Performance of Copying Large Files Across a Contention-Based Network

    Analytical and simulation models of interconnected local area networks, because of the large scale involved, are often constrained to represent only the most ideal conditions for tractability's sake. Consequently, many of the important causes of network delay are not accounted for. In this study, experimental evidence is presented to show how delay time in local area networks is significantly affected by hardware limitations in the connected workstations, software overhead, and network contention. The vehicle is a controlled experiment with two Vax workstations communicating over an Ethernet. We investigate the network delays for large file transfers, taking into account the Vax workstation disk transfer limitations; generalized file transfer software such as NFS, FTP, and rcp; and the effect of contention introduced on this simple network by substantial workload from competing workstations. A comparison is made between the experimental data and a network modeling tool, and the limitations of the tool are explained. Insights from these experiments have increased our understanding of how more complex networks are likely to perform under heavy workloads.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107873/1/citi-tr-89-3.pd
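
    The measurements themselves are not reproduced in the abstract; a minimal sketch of the kind of timing measurement involved (hypothetical paths, with a local copy standing in for an NFS/FTP/rcp transfer) could be:

        import os, shutil, time

        def timed_copy(src, dst):
            # Time a file copy and report elapsed seconds and effective
            # throughput in bytes per second.
            size = os.path.getsize(src)
            start = time.monotonic()
            shutil.copyfile(src, dst)
            elapsed = time.monotonic() - start
            return elapsed, size / elapsed

        # Hypothetical paths; a mounted remote filesystem would exercise the network.
        # elapsed, rate = timed_copy("/tmp/large.dat", "/mnt/server/large.dat")
        # print(f"{elapsed:.1f} s, {rate / 1e6:.2f} MB/s")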

    Usage refinement for ER-to-relation design transformations

    Database schema refinement based on usage is proposed as a useful next step in a practical database design methodology founded upon entity-relationship (ER) conceptual modeling and transformation to normalized relations. A simple cost model is defined and applied to several examples and a case study, illustrating the important trade-offs among query and update costs, storage requirements, and degree of normalization with its data integrity implications.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/29289/1/0000350.pd
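
    The paper's actual cost model is not given in the abstract; a minimal sketch of a frequency-weighted query/update cost comparison between two candidate schemas (all figures invented for illustration) could look like:

        def schema_cost(query_freqs, query_costs, update_freqs, update_costs,
                        storage_blocks, storage_weight=0.0):
            # Frequency-weighted I/O cost of a candidate schema, plus an
            # optional storage penalty.
            query_io = sum(f * c for f, c in zip(query_freqs, query_costs))
            update_io = sum(f * c for f, c in zip(update_freqs, update_costs))
            return query_io + update_io + storage_weight * storage_blocks

        # Illustrative numbers: a normalized design pays more per query (joins)
        # but less per update; a denormalized design reverses the trade-off
        # and uses more storage.
        normalized = schema_cost([100], [12], [20], [2], storage_blocks=500)
        denormalized = schema_cost([100], [4], [20], [6], storage_blocks=800)
        print(normalized, denormalized)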

    Dependability and Performance Measures for the Database Practitioner

    We estimate the availability, reliability, and mean transaction time (response time) for repairable database configurations, centralized or distributed, in which each service component is continuously available for repair. Reliability, the probability that the entire transaction can execute properly without failure, is computed as a function of mean time to failure (MTTF) and mean time to repair (MTTR). Mean transaction time in the system is a function of the mean service delay time for the transaction over all components, plus restart delays due to component failures, plus queuing delays for contention. These estimates are potentially applicable to more generalized distributed systems.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/83501/1/1998.IEEE(2).pd
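
    As a hedged sketch of how the listed ingredients (service delay, restart delay from failures, queuing delay from contention) might combine for a single transaction, under simplifying assumptions the abstract does not spell out (M/M/1-style queuing, exponential failures):

        import math

        def mean_transaction_time(service_times, utilizations, mttf, mttr):
            # Sum over components of queueing-inflated service time plus an
            # expected restart delay when a failure occurs during service.
            total = 0.0
            for s, rho in zip(service_times, utilizations):
                waiting = s / (1.0 - rho)              # service + queueing delay
                p_fail = 1.0 - math.exp(-s / mttf)     # failure during this service
                total += waiting + p_fail * mttr       # restart modeled as a repair wait
            return total

        # Illustrative figures: two components with 20 ms and 50 ms service times.
        print(mean_transaction_time([0.020, 0.050], [0.3, 0.6], mttf=1000.0, mttr=2.0))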

    A knowledge-based approach to multiple query processing

    The collective processing of multiple queries in a database system has recently received renewed attention due to its capability of improving the overall performance of a database system and its applicability to the design of knowledge-based expert systems and extensible database systems. A new multiple query processing strategy is presented which utilizes semantic knowledge on data integrity and information on predicate conditions of the access paths (plans) of queries. The processing of multiple queries is accomplished by the utilization of subset relationships between intermediate results of query executions, which are inferred employing both semantic and logical information. Given a set of fixed order access plans, the A* algorithm is used to find the set of reformulated access plans which is optimal for a given collection of semantic knowledge.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/28071/1/0000514.pd
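
    The reformulation algorithm itself is not given in the abstract; the core idea of recognizing subset relationships between intermediate results from predicate conditions can be sketched, for hypothetical range predicates only, as:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class RangePredicate:
            # Selection predicate of the form  lo <= attribute <= hi.
            attribute: str
            lo: float
            hi: float

            def subsumes(self, other):
                # True if every tuple satisfying `other` also satisfies `self`,
                # so other's intermediate result is a subset of this one's.
                return (self.attribute == other.attribute
                        and self.lo <= other.lo and other.hi <= self.hi)

        # Hypothetical predicates for illustration.
        q1 = RangePredicate("salary", 20_000, 80_000)
        q2 = RangePredicate("salary", 30_000, 50_000)
        if q1.subsumes(q2):
            print("q2 can be answered from q1's intermediate result")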

    NetMod: A Design Tool for Large-Scale Heterogeneous Campus Networks

    The Network Modeling Tool (NetMod) uses simple analytical models to provide the designers of large interconnected local area networks with an in-depth analysis of the potential performance of these systems. The tool can be used in a university, industrial, or governmental campus networking environment consisting of thousands of computer sites. NetMod is implemented with a combination of the easy-to-use Macintosh software packages HyperCard and Excel. The objectives of NetMod, the analytical models, and the user interface are described in detail, along with the tool's application to an actual campus-wide network.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107971/1/citi-tr-90-1.pd
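
    NetMod's actual models are not reproduced here; a minimal example of the kind of simple analytical building block such a tool composes (one segment treated as an M/M/1 queue, with invented figures) is:

        def segment_delay(arrival_rate_pps, mean_frame_bits, capacity_bps):
            # Mean frame delay on one network segment modeled as an M/M/1 queue.
            service_time = mean_frame_bits / capacity_bps
            utilization = arrival_rate_pps * service_time
            if utilization >= 1.0:
                raise ValueError("offered load exceeds segment capacity")
            return service_time / (1.0 - utilization), utilization

        # Illustrative figures: 600 frames/s of 1000-byte frames on 10 Mb/s Ethernet.
        delay, util = segment_delay(arrival_rate_pps=600,
                                    mean_frame_bits=8_000,
                                    capacity_bps=10_000_000)
        print(f"mean frame delay ~ {delay * 1e3:.2f} ms at {util:.0%} utilization")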

    User Profile and Workload Analysis for Local Area Networks

    Performance analysis tools for computer networks need accurate and comprehensive estimates of user workload. An approach is presented that estimates network impact for a wide variety of end-user types and applications typical of local area networks. Fourteen user types and nine generic application types are defined, and data are collected to determine the average network bandwidth needed to accommodate the output of individual and aggregate user/application combinations. Workload is estimated using a combination of data obtained from live test experiments and data collected from the literature. Finally, the implementation of this data in a highly interactive network modeling tool (NetMod) is illustrated with screen images generated during tool execution.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107870/1/citi-tr-90-3.pd
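
    The measured bandwidth figures are not reproduced in the abstract; as a hedged sketch of the aggregation step, with all user counts, activity levels, and per-application bandwidths invented for illustration:

        # Hypothetical per-application bandwidth (kb/s per active user).
        app_bandwidth = {"file_transfer": 120.0, "email": 2.0, "terminal": 0.5}

        # Hypothetical user profiles: population size and the fraction of time
        # each user type actively runs each application.
        profiles = {
            "engineer": {"count": 40, "usage": {"file_transfer": 0.10, "terminal": 0.30}},
            "clerical": {"count": 80, "usage": {"email": 0.20, "terminal": 0.05}},
        }

        def aggregate_load(profiles, app_bandwidth):
            # Sum bandwidth contributions over all user/application pairs.
            return sum(p["count"] * activity * app_bandwidth[app]
                       for p in profiles.values()
                       for app, activity in p["usage"].items())

        print(f"aggregate offered load ~ {aggregate_load(profiles, app_bandwidth):.0f} kb/s")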

    Time sequence ordering extensions to the Entity-Relationship model and their application to the automated manufacturing process

    New extensions to the entity-relationship (E-R) model have been developed to represent time sequencing and ordering aspects of information flow, and to represent the integration of control (programming) information into a database. The model constructs specify an implementation ordering of records in a relational database table. Time sequencing refers to an implementation of process information flow as a result of this ordering. The modeling constructs are needed to model more completely the process recipe information flow in a typical automated manufacturing facility. The development of these E-R extensions is pursued using a data model for the factory of the future as a motivation and development vehicle. The extensions are defined formally, and possible variations of the constructs are given. Further, the incorporation of the constructs into the existing E-R model semantics and the transformation of these extensions into ordering properties and integrity constraints on a corresponding database implementation are discussed.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/29158/1/0000203.pd
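
    The formal constructs are not reproduced in the abstract; a toy illustration of the resulting implementation property (rows of a recipe-step table constrained to follow a strict insertion order), which is not the paper's own notation, might be:

        class SequencedTable:
            # Toy table whose rows carry an explicit sequence number, mimicking
            # a time-sequence ordering constraint on a process-recipe step table.
            def __init__(self):
                self.rows = []

            def insert(self, seq, **attrs):
                expected = len(self.rows) + 1
                if seq != expected:                      # ordering integrity constraint
                    raise ValueError(f"step {seq} out of order; expected {expected}")
                self.rows.append({"seq": seq, **attrs})

        recipe = SequencedTable()
        recipe.insert(1, step="deposit oxide")
        recipe.insert(2, step="spin photoresist")
        # recipe.insert(4, step="etch")   # would raise: ordering violated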

    Analysis of X.500 Distributed Directory Refresh Strategies

    Distributed database directory refresh strategies, commonly recommended for the X.500 standard, are defined and analytically modeled for variations on push/pull and total/differential refresh under idealized asynchronous control conditions. The models are implemented in a HyperCard-based tool called DirMod (for "directory model"). Experimental test results show important elapsed-time performance tradeoffs among the different strategies, and live test data contribute to the verification of the models.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107872/1/citi-tr-90-6.pd
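
    DirMod's models are not shown in the abstract; a rough sketch of the total-versus-differential comparison (push versus pull would mainly change who initiates the refresh and is not modeled here; all figures are invented) could be:

        def refresh_time(entries, changed_fraction, per_entry_cost, setup_cost,
                         differential):
            # Elapsed time for one refresh: a total refresh ships every entry,
            # a differential refresh ships only the entries that changed.
            shipped = entries * (changed_fraction if differential else 1.0)
            return setup_cost + shipped * per_entry_cost

        # Invented workload figures: 50,000 entries, 2% changed since last refresh.
        total_refresh = refresh_time(50_000, 0.02, per_entry_cost=0.004,
                                     setup_cost=5.0, differential=False)
        diff_refresh = refresh_time(50_000, 0.02, per_entry_cost=0.004,
                                    setup_cost=5.0, differential=True)
        print(f"total: {total_refresh:.0f} s   differential: {diff_refresh:.0f} s")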